

Search for: All records

Creators/Authors contains: "Janowicz, Krzysztof"


  1. Free, publicly-accessible full text available August 1, 2024
  2. Abstract. Driven by foundation models, recent progress in AI and machine learning has reached unprecedented complexity. For instance, the GPT-3 language model consists of 175 billion parameters and was trained on 570 GB of data. While it has achieved remarkable performance in generating text that is difficult to distinguish from human-authored content, a single training run of the model is estimated to produce over 550 metric tons of CO2 emissions. Likewise, we see advances in GeoAI research improving large-scale prediction tasks such as satellite image classification and global climate modeling, to name just two. While these models have not yet reached comparable complexity and emission levels, spatio-temporal models differ from language and image-generation models in several ways that make it necessary to (re)train them more often, with potentially large implications for sustainability. Although recent work in the machine learning community has started calling for greener and more energy-efficient AI alongside improvements in model accuracy, this trend has not yet reached the GeoAI community at large. In this work, we not only bring this issue to the attention of the GeoAI community but also present ethical considerations from a geographic perspective that are missing from the broader, ongoing AI-sustainability discussion. To start this discussion, we propose a framework to evaluate models from several sustainability-related angles, including energy efficiency, carbon intensity, transparency, and social implications. We encourage future AI/GeoAI work to acknowledge its environmental impact as a step towards a more resource-conscious society. Similar to the current push for reproducibility, future publications should also report the energy/carbon costs of improvements over prior work (a simple way to estimate such costs is sketched after this list).
  3. Abstract. Qualitative spatial/temporal reasoning (QSR/QTR) plays a key role in research on human cognition, e.g., as it relates to navigation, as well as in work on robotics and artificial intelligence. Although previous work has mainly focused on various spatial and temporal calculi, more recently representation learning techniques such as embeddings have been applied to reasoning and inference tasks such as query answering and knowledge base completion. These subsymbolic, learnable representations are well suited for handling the noise and efficiency problems that plagued prior work. However, applying embedding techniques to spatial and temporal reasoning has received little attention to date. In this paper, we explore two research questions: (1) How do embedding-based methods perform empirically compared to traditional reasoning methods on QSR/QTR problems? (2) If the embedding-based methods are better, what causes this superiority? To answer these questions, we first propose a hyperbolic embedding model, called HyperQuaternionE, to capture varying properties of relations (such as symmetry and anti-symmetry), to learn inverse relations and relation compositions (i.e., composition tables), and to model hierarchical structures over entities induced by transitive relations. We conduct various experiments on two synthetic datasets to demonstrate the advantages of our proposed embedding-based method over existing embedding models as well as traditional reasoners with respect to entity inference and relation inference. Additionally, our qualitative analysis reveals that our method is able to learn conceptual neighborhoods implicitly. We conclude that the success of our method is attributable to its ability to model composition tables and learn conceptual neighbors, which are among the core building blocks of QSR/QTR. (A simplified sketch of quaternion-based relation embeddings appears after this list.)
  4. Abstract. The longer the COVID-19 pandemic lasts, the more apparent it becomes that understanding its social drivers may be as important as understanding the virus itself. One such social driver is misinformation and distrust in institutions. This is particularly interesting as the scientific process is more transparent than ever before. Numerous scientific teams have published datasets covering almost any imaginable aspect of COVID-19 over the last two years. However, consistently and efficiently integrating these separate data "silos" and making them intelligible to scientists, decision makers, journalists, and, more importantly, the general public remains a key challenge with important implications for transparency. Several types of knowledge graphs have been published to tackle this issue and to enable data crosswalks by providing rich contextual information. Interestingly, none of these graphs has focused on COVID-19 forecasts, despite forecasts acting as the underpinning for decision making. In this work we motivate the need for exposing forecasts as a knowledge graph, showcase queries that run against the graph, and geographically interlink forecasts with indicators of economic impact. (A minimal sketch of such a forecast graph follows this list.)
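The GeoAI-sustainability abstract (item 2) calls for reporting the energy/carbon cost of model training. A common back-of-the-envelope estimate multiplies hardware power draw, training time, data-center overhead (PUE), and grid carbon intensity. The sketch below uses this generic formula; the parameter values are illustrative placeholders and are not figures or a method taken from the paper.

```python
# Minimal sketch of a training-emissions estimate:
#   energy (kWh)       = GPU power * GPU count * hours * PUE
#   emissions (kg CO2e) = energy * grid carbon intensity
# All numbers below are illustrative placeholders, not values from the paper.

def training_emissions_kg(gpu_power_watts: float,
                          num_gpus: int,
                          hours: float,
                          pue: float = 1.5,
                          grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Return a rough estimate of training emissions in kg CO2e."""
    energy_kwh = (gpu_power_watts / 1000.0) * num_gpus * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

if __name__ == "__main__":
    # e.g. 64 GPUs drawing 300 W each, running for two weeks
    print(f"{training_emissions_kg(300, 64, 14 * 24):.0f} kg CO2e")
```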
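The HyperQuaternionE abstract (item 3) describes modeling relation properties such as inversion and composition. As a rough, non-hyperbolic illustration of that general idea (not the paper's actual model), the sketch below represents relations as unit quaternions that rotate entity embeddings via the Hamilton product, so the inverse relation corresponds to the conjugate quaternion and relation composition to quaternion multiplication. All names and values are hypothetical.

```python
import numpy as np

# Illustrative quaternion-rotation embedding (not HyperQuaternionE itself):
# entities and relations are quaternions (w, x, y, z); a relation acts on a
# head entity by Hamilton product.

def hamilton(p, q):
    """Hamilton product of two quaternions given as length-4 numpy arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def unit(q):
    return q / np.linalg.norm(q)

def conjugate(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def score(head, relation, tail):
    """Plausibility of (head, relation, tail): values closer to 0 are better."""
    return -np.linalg.norm(hamilton(head, unit(relation)) - tail)

rng = np.random.default_rng(0)
before = unit(rng.normal(size=4))          # hypothetical temporal relation
before_inverse = conjugate(before)         # its inverse, e.g. "after"
before_twice = hamilton(before, before)    # composition "before o before"

h, t = rng.normal(size=4), rng.normal(size=4)
print(score(h, before, t), score(t, before_inverse, h))
```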
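The forecast knowledge-graph abstract (item 4) describes exposing COVID-19 forecasts as a graph that can be queried and geographically interlinked. The sketch below builds a tiny toy graph with rdflib and runs a SPARQL query over it; the namespace, property names, and numbers are hypothetical and only illustrate the general approach, not the paper's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespace and properties chosen for illustration only.
EX = Namespace("http://example.org/covid-forecast#")

g = Graph()
forecast = EX["forecast/2021-06-01/us-06"]
g.add((forecast, RDF.type, EX.Forecast))
g.add((forecast, EX.targetRegion, URIRef("http://example.org/region/California")))
g.add((forecast, EX.predictedCases, Literal(4200, datatype=XSD.integer)))
g.add((forecast, EX.forecastDate, Literal("2021-06-01", datatype=XSD.date)))

# A simple SPARQL query over the toy graph: predicted cases per region.
results = g.query("""
    PREFIX ex: <http://example.org/covid-forecast#>
    SELECT ?region ?cases WHERE {
        ?f a ex:Forecast ;
           ex:targetRegion ?region ;
           ex:predictedCases ?cases .
    }
""")
for region, cases in results:
    print(region, cases)
```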